Robust Convolutional Neural Networks under Adversarial Noise
Authors
Abstract
Recent studies have shown that Convolutional Neural Networks (CNNs) are vulnerable to small perturbations of the input known as "adversarial examples". In this work, we propose a new feedforward CNN that improves robustness in the presence of adversarial noise. Our model uses stochastic additive noise applied to the input image and to the CNN model. The proposed model operates in conjunction with a CNN trained with the standard backpropagation algorithm. In particular, the convolution, max-pooling, and ReLU layers are modified to benefit from the noise model. Our model is parameterized by only a mean and a variance per pixel, which simplifies computation and makes the method scalable to deep architectures. The proposed model outperforms the standard CNN by 13.12% on ImageNet and 7.37% on CIFAR-10 under adversarial noise, at the expense of a 0.28% accuracy drop on the original datasets with no added noise.
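To make the mean-and-variance idea concrete, below is a minimal sketch of how a per-pixel mean and variance could be propagated through the three modified layer types, assuming independent Gaussian noise at every unit. The closed-form ReLU moments and the pick-the-largest-mean pooling rule are standard approximations, not necessarily the authors' exact formulation, and all shapes and noise levels here are illustrative.

```python
# Minimal sketch (not the authors' released code): propagating a per-pixel
# Gaussian noise model -- one mean and one variance per unit -- through
# convolution, ReLU, and max-pooling layers.
import math
import torch
import torch.nn.functional as F

def conv_moments(mean, var, weight, bias=None, stride=1, padding=0):
    # Convolution is linear, so the mean is convolved as usual; under an
    # independence assumption the variance is convolved with squared weights.
    mean_out = F.conv2d(mean, weight, bias, stride=stride, padding=padding)
    var_out = F.conv2d(var, weight ** 2, None, stride=stride, padding=padding)
    return mean_out, var_out

def relu_moments(mean, var, eps=1e-12):
    # Exact mean/variance of ReLU(X) for X ~ N(mean, var), via the
    # closed-form rectified-Gaussian moments.
    sigma = var.clamp_min(eps).sqrt()
    z = mean / sigma
    pdf = torch.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))
    mean_out = mean * cdf + sigma * pdf
    second_moment = (mean ** 2 + var) * cdf + mean * sigma * pdf
    var_out = (second_moment - mean_out ** 2).clamp_min(0.0)
    return mean_out, var_out

def maxpool_moments(mean, var, kernel_size=2):
    # Crude approximation: select the unit with the largest mean in each
    # pooling window and carry its mean and variance through.
    mean_out, idx = F.max_pool2d(mean, kernel_size, return_indices=True)
    var_out = var.flatten(2).gather(2, idx.flatten(2)).view_as(mean_out)
    return mean_out, var_out

# Usage: start from a clean image and an assumed (hypothetical) noise level.
x = torch.rand(1, 3, 32, 32)                   # input image
mean, var = x, torch.full_like(x, 0.05 ** 2)   # assumed noise std 0.05
w = torch.randn(16, 3, 3, 3) * 0.1             # example conv weights
mean, var = conv_moments(mean, var, w, padding=1)
mean, var = relu_moments(mean, var)
mean, var = maxpool_moments(mean, var)
print(mean.shape, var.shape)                   # both (1, 16, 16, 16)
```

Tracking only these two statistics adds just one extra tensor per activation, which is what keeps such a scheme cheap enough to scale to deep architectures.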
Similar Sources
Automatic Colorization with Deep Convolutional Generative Adversarial Networks
We attempt to use DCGANs (deep convolutional generative adversarial nets) to tackle the automatic colorization of black-and-white photos, combating the tendency of vanilla neural nets to "average out" their results. We construct a small feed-forward convolutional neural network as a baseline colorization system. We train the baseline model on the CIFAR-10 dataset with a per-pixel Euclidean loss ...
Robust Training under Linguistic Adversity
Deep neural networks have achieved remarkable results across many language processing tasks; however, they have been shown to be susceptible to overfitting and highly sensitive to noise, including adversarial attacks. In this work, we propose a linguistically motivated approach for training robust models based on exposing the model to corrupted text examples at training time. We consider several...
Learning Robust Representations of Text
Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting network sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets...
Building Robust Deep Neural Networks for Road Sign Detection
Deep neural networks are built with generalization beyond the training set in mind, using techniques such as regularization, early stopping, and dropout. However, measures to make them more resilient to adversarial examples are rarely taken. As deep neural networks become more prevalent in mission-critical and real-time systems, miscreants have started to attack them by intentionally making deep neural ne...
Hypernetworks with Statistical Filtering for Defending Adversarial Examples
Deep learning algorithms are known to be vulnerable to adversarial perturbations in various tasks such as image classification. This problem has been addressed by employing several defense methods for the detection and rejection of particular types of attacks. However, training and manipulating networks according to particular defense schemes increases the computational complexity of the learning algo...
Journal: CoRR
Volume: abs/1511.06306
Pages: -
Year of publication: 2015